

Who Is Lagging Behind: Profiling Student Behaviors with Graph-Level Encoding in Curriculum-Based Online Learning Systems

Xiao, Qian, Breathnach, Conn, Ghergulescu, Ioana, O'Sullivan, Conor, Johnston, Keith, Wade, Vincent

arXiv.org Artificial Intelligence

The surge in the adoption of Intelligent Tutoring Systems (ITSs) in education, while being integral to curriculum-based learning, can inadvertently exacerbate performance gaps. To address this problem, student profiling becomes crucial for tracking progress, identifying struggling students, and alleviating disparities among students. Such profiling requires measuring student behaviors and performance across different aspects, such as content coverage, learning intensity, and proficiency in different concepts within a learning topic. In this study, we introduce CTGraph, a graph-level representation learning approach to profile learner behaviors and performance in a self-supervised manner. Our experiments demonstrate that CTGraph can provide a holistic view of student learning journeys, accounting for different aspects of student behaviors and performance, as well as variations in their learning paths as aligned to the curriculum structure. We also show that our approach can identify struggling students and provide comparative analysis of diverse groups to pinpoint when and where students are struggling. As such, our approach opens more opportunities to empower educators with rich insights into student learning journeys and paves the way for more targeted interventions.
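CTGraph's learned encoder is not reproduced here, but the idea of comparing students through graph-level summaries can be illustrated with a deliberately simple stand-in: each student's concept-interaction graph reduced to a normalized degree histogram, with Euclidean distance used to compare learning profiles. The descriptor, the student graphs, and all names below are illustrative assumptions, not the paper's actual method.

```python
from collections import Counter
import math

def degree_histogram(edges, num_bins=4):
    """Toy graph-level descriptor: normalized histogram of node degrees.
    (A stand-in for a learned graph embedding; illustrative only.)"""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = [0.0] * num_bins
    for d in deg.values():
        hist[min(d, num_bins) - 1] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two hypothetical students' concept-interaction graphs (edges link concepts
# the student worked on together within a topic).
student_a = [(1, 2), (2, 3), (3, 4), (1, 3)]
student_b = [(1, 2)]  # much sparser coverage: a possible "lagging" signal
d = distance(degree_histogram(student_a), degree_histogram(student_b))
print(round(d, 3))
```

With a learned encoder in place of the histogram, the same distance-based comparison is what lets one flag students whose trajectories diverge from their peers'.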


Predicting ChatGPT Use in Assignments: Implications for AI-Aware Assessment Design

Das, Surajit, Eliseev, Aleksei

arXiv.org Artificial Intelligence

The rise of generative AI tools like ChatGPT has significantly reshaped education, sparking debates about their impact on learning outcomes and academic integrity. While prior research highlights opportunities and risks, there remains a lack of quantitative analysis of student behavior when completing assignments. Understanding how these tools influence real-world academic practices, particularly assignment preparation, is a pressing and timely research priority. This study addresses this gap by analyzing survey responses from 388 university students, primarily from Russia, including a subset of international participants. Using the XGBoost algorithm, we modeled predictors of ChatGPT usage in academic assignments. Key predictive factors included learning habits, subject preferences, and student attitudes toward AI. Our binary classifier demonstrated strong predictive performance, achieving 80.1% test accuracy, with 80.2% sensitivity and 79.9% specificity. The multiclass classifier achieved 64.5% test accuracy, 64.6% weighted precision, and 64.5% recall, with similar training scores, indicating potential data scarcity challenges. The study reveals that frequent use of ChatGPT for learning new concepts correlates with potential overreliance, raising concerns about long-term academic independence. These findings suggest that while generative AI can enhance access to knowledge, unchecked reliance may erode critical thinking and originality. We propose discipline-specific guidelines and reimagined assessment strategies to balance innovation with academic rigor. These insights can guide educators and policymakers in ethically and effectively integrating AI into education.
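As a reminder of how the reported figures relate, accuracy, sensitivity, and specificity all come straight from the confusion-matrix counts. The counts below are made up for illustration and are not the study's data.

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts on a held-out split (illustrative, not the paper's data):
acc, sens, spec = binary_metrics(tp=80, fn=20, tn=80, fp=20)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```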


High-Performance Parallel Optimization of the Fish School Behaviour on the Setonix Platform Using OpenMP

Wang, Haitian, Qin, Long

arXiv.org Artificial Intelligence

This paper presents an in-depth investigation into the high-performance parallel optimization of the Fish School Behaviour (FSB) algorithm on the Setonix supercomputing platform using the OpenMP framework. Given the increasing demand for enhanced computational capabilities for complex, large-scale calculations across diverse domains, there is a pressing need for optimized parallel algorithms and computing structures. The FSB algorithm, inspired by nature's social behavior patterns, provides an ideal platform for parallelization due to its iterative and computationally intensive nature. This study leverages the capabilities of the Setonix platform and the OpenMP framework to analyze various aspects of multi-threading, such as thread counts, scheduling strategies, and OpenMP constructs, aiming to discern patterns and strategies that can elevate program performance. Experiments were designed to rigorously test different configurations, and our results not only offer insights for parallel optimization of FSB on Setonix but also provide valuable references for other parallel computational research using OpenMP. Looking forward, other factors, such as cache behavior and thread scheduling strategies at micro and macro levels, hold potential for further exploration and optimization.
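OpenMP itself lives in C/C++/Fortran, but the shape of the experiment, splitting the per-fish update loop across a thread pool and varying the chunking, can be sketched in Python. The update rule below is a placeholder, not the actual FSB dynamics, and the chunking only loosely mirrors OpenMP's `schedule(static, chunk)` clause.

```python
from concurrent.futures import ThreadPoolExecutor

def update_fish(position):
    # Placeholder for one fish's movement step; the real FSB rule combines
    # individual and collective movement operators. Illustrative only.
    return position + 0.1

def parallel_step(positions, workers=4, chunk=8):
    # Chunking is analogous to OpenMP's schedule(static, chunk): larger
    # chunks cut coordination overhead, smaller chunks balance load better
    # when iterations vary in cost.
    chunks = [positions[i:i + chunk] for i in range(0, len(positions), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: [update_fish(p) for p in c], chunks)
    return [p for c in results for p in c]

school = parallel_step([float(i) for i in range(32)])
print(round(school[0], 1), round(school[-1], 1))
```

Note that CPython threads do not give the true data parallelism OpenMP provides for compute-bound loops; the sketch shows the decomposition, not the speedup.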


Japan's Lower House passes AI promotion bill

The Japan Times

The House of Representatives, Japan's lower chamber of parliament, passed a bill on Thursday to promote the development of artificial intelligence technology and take steps to mitigate its risks. The legislation is expected to be enacted during the current parliamentary session set to end in June after deliberations at the House of Councilors, the upper chamber. AI "will be the foundation of economic and social development and is an important technology from the viewpoint of security," the bill said.


From Motion Signals to Insights: A Unified Framework for Student Behavior Analysis and Feedback in Physical Education Classes

Gao, Xian, Ruan, Jiacheng, Gao, Jingsheng, Xie, Mingye, Zhang, Zongyun, Liu, Ting, Fu, Yuzhuo

arXiv.org Artificial Intelligence

Analyzing student behavior in educational scenarios is crucial for enhancing teaching quality and student engagement. Existing AI-based models often rely on classroom video footage to identify and analyze student behavior. While these video-based methods can partially capture and analyze student actions, they struggle to track each student accurately in physical education classes, which take place in open outdoor spaces with diverse activities, and they generalize poorly to the specialized technical movements involved in these settings. Furthermore, current methods typically lack the ability to integrate specialized pedagogical knowledge, limiting their ability to provide in-depth insights into student behavior and offer feedback for optimizing instructional design. To address these limitations, we propose a unified end-to-end framework that leverages human activity recognition technologies based on motion signals, combined with advanced large language models, to conduct more detailed analyses and feedback of student behavior in physical education classes. Our framework begins with the teacher's instructional designs and the motion signals from students during physical education sessions, ultimately generating automated reports with teaching insights and suggestions for improving both learning and class instruction. This solution provides a motion signal-based approach for analyzing student behavior and optimizing instructional design tailored to physical education classes. Experimental results demonstrate that our framework can accurately identify student behaviors and produce meaningful pedagogical insights.


Detecting AI-Generated Text in Educational Content: Leveraging Machine Learning and Explainable AI for Academic Integrity

Najjar, Ayat A., Ashqar, Huthaifa I., Darwish, Omar A., Hammad, Eman

arXiv.org Artificial Intelligence

This study seeks to enhance academic integrity by providing tools to detect AI-generated content in student work using advanced technologies. The findings promote transparency and accountability, helping educators maintain ethical standards and supporting the responsible integration of AI in education. A key contribution of this work is the generation of the CyberHumanAI dataset, which has 1000 observations, 500 of which are written by humans and the other 500 produced by ChatGPT. We evaluate various machine learning (ML) and deep learning (DL) algorithms on the CyberHumanAI dataset comparing human-written and AI-generated content from Large Language Models (LLMs) (i.e., ChatGPT). Results demonstrate that traditional ML algorithms, specifically XGBoost and Random Forest, achieve high performance (83% and 81% accuracy, respectively). Results also show that classifying shorter content seems to be more challenging than classifying longer content. Further, using Explainable Artificial Intelligence (XAI), we identify discriminative features influencing the ML model's predictions: human-written content tends to use practical language (e.g., use and allow), while AI-generated text is characterized by more abstract and formal terms (e.g., realm and employ). Finally, a comparative analysis with GPTZero shows that our narrowly focused, simple, and fine-tuned model can outperform generalized systems like GPTZero. The proposed model achieved approximately 77.5% accuracy compared to GPTZero's 48.5% accuracy when tasked to classify Pure AI, Pure Human, and mixed classes. GPTZero showed a tendency to classify challenging and small-content cases as either mixed or unrecognized, while our proposed model showed a more balanced performance across the three classes. Keywords: LLMs, Digital Technology, Education, Plagiarism, Human AI
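The XAI finding, that particular words separate human from AI text, can be illustrated with a simple smoothed log-odds score over two toy corpora. The snippets below are invented for illustration; this is not the study's pipeline or data.

```python
from collections import Counter
import math

def log_odds(word, counts_a, counts_b):
    """Smoothed log-odds of a word appearing in corpus A vs corpus B.
    Positive scores mark words more characteristic of corpus A."""
    pa = (counts_a[word] + 1) / (sum(counts_a.values()) + len(counts_a))
    pb = (counts_b[word] + 1) / (sum(counts_b.values()) + len(counts_b))
    return math.log(pa / pb)

# Tiny invented corpora echoing the paper's observation about word choice.
human = Counter("we use this tool to allow students to use examples".split())
ai = Counter("within the realm of education one may employ such methods".split())

print(round(log_odds("use", human, ai), 2), round(log_odds("realm", human, ai), 2))
```

Scores like these are one of the simplest ways an explainability analysis can surface which features push a classifier toward the "human" or "AI" label.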


An Open Data Platform to Advance Gender Equality in STEM in Latin America

Communications of the ACM

Expanding the involvement of women in Science, Technology, Engineering, and Mathematics (STEM) across Latin America is crucial for economic advancement, social equity, and global competitiveness; however, these efforts have proven to be challenging. Women in the region are underrepresented in STEM [10] and even more so in leadership positions [17,18]. The limited availability of current information and the difficulties associated with obtaining reliable data on gender disparities make it hard to implement policies to reduce the gender gap in STEM. Researchers, organizations, and policymakers working to reduce the gender gap need access to dependable data to understand the root causes of gender disparities, promote evidence-based interventions, and increase accountability and transparency. In the quest for solutions to these challenges, an international research network between Bolivia, Brazil, and Peru, "Equality in Leadership for Latin America STEM" (ELLAS), emerged in 2022 [6].


The Great AI Witch Hunt: Reviewers Perception and (Mis)Conception of Generative AI in Research Writing

Hadan, Hilda, Wang, Derrick, Mogavi, Reza Hadi, Tu, Joseph, Zhang-Kennedy, Leah, Nacke, Lennart E.

arXiv.org Artificial Intelligence

Since the release of ChatGPT in November 2022 [61], GenAI has become increasingly popular in assisting people with written, auditory, and visual tasks [45, 58, 78]. In research, GenAI offers a new approach to manuscript writing, as it can handle tasks ranging from text improvement suggestions to speech-to-text translation and even crafting initial drafts [45, 52]. Its ability to understand context and generate human-like and grammatically accurate responses fosters innovative brainstorming and enhances the quality and readability of research publications [5]. However, along with GenAI's potential to augment research activities, concerns about transparency, academic integrity, and the urgency of maintaining the credibility of research work have emerged [21, 54, 73, 78]. Despite the growing interest in using GenAI for manuscript writing and research activities [45, 64], many researchers hesitate to acknowledge its use in their papers. This is illustrated by several instances where research publications with undisclosed GenAI use were identified by readers (e.g., [53, 71, 72, 79]). Studies have identified the phenomenon of AI aversion, where AI-generated content, even if factual, is often perceived as inaccurate and misleading [12, 56], and disclosing its use can negatively impact readers' satisfaction and perception of the authors' qualifications and effort [69]. Therefore, researchers' hesitancy is partly due to their fear that acknowledging GenAI use might damage readers' perception of their work.


Survey on Plagiarism Detection in Large Language Models: The Impact of ChatGPT and Gemini on Academic Integrity

Pudasaini, Shushanta, Miralles-Pechuán, Luis, Lillis, David, Salvador, Marisa Llorens

arXiv.org Artificial Intelligence

The rise of Large Language Models (LLMs) such as ChatGPT and Gemini has posed new challenges for the academic community. With the help of these models, students can easily complete their assignments and exams, while educators struggle to detect AI-generated content. This has led to a surge in academic misconduct, as students present work generated by LLMs as their own, without putting in the effort required for learning. As AI tools become more advanced and produce increasingly human-like text, detecting such content becomes more challenging. This development has significantly impacted the academic world, where many educators are finding it difficult to adapt their assessment methods to this challenge. This research first demonstrates how LLMs have increased academic dishonesty, and then reviews state-of-the-art solutions for academic plagiarism in detail. A survey of datasets, algorithms, tools, and evasion strategies for plagiarism detection has been conducted, focusing on how LLMs and AI-generated content (AIGC) detection have affected this area. The survey aims to identify the gaps in existing solutions. Lastly, potential long-term solutions are presented to address the issue of academic plagiarism using LLMs based on AI tools and educational approaches in an ever-changing world.


Automatic question generation for propositional logical equivalences

Yang, Yicheng, Wang, Xinyu, Yu, Haoming, Li, Zhiyuan

arXiv.org Artificial Intelligence

The increase in academic dishonesty cases among college students has raised concerns, particularly due to the shift towards online learning caused by the pandemic. We aim to develop and implement a method capable of generating tailored questions for each student. The use of Automatic Question Generation (AQG) is a possible solution. Previous studies have investigated AQG frameworks in education, which include validity, user-defined difficulty, and personalized problem generation. Our new AQG approach produces logical equivalence problems for Discrete Mathematics, which is a core course for year-one computer science students. This approach utilizes a syntactic grammar and a semantic attribute system through top-down parsing and syntax tree transformations. Our experiments show that the questions generated by our AQG approach are similar in difficulty to the questions presented to students in the textbook [1]. These results confirm the practicality of our AQG approach for automated question generation in education, with the potential to significantly enhance learning experiences.
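The paper's grammar-and-attribute system is not reproduced here, but the core mechanism, rewriting a formula's syntax tree with an equivalence rule and confirming that truth values are preserved, can be sketched directly. The tuple-based formula representation and the single De Morgan rewrite are illustrative assumptions, not the authors' implementation.

```python
from itertools import product

# Formulas as nested tuples: ('var', 'p'), ('not', f), ('and', f, g), ('or', f, g)
def de_morgan(f):
    """Rewrite NOT(p AND q) as (NOT p) OR (NOT q): one equivalence transformation."""
    if f[0] == 'not' and f[1][0] == 'and':
        _, (_, a, b) = f
        return ('or', ('not', a), ('not', b))
    return f

def evaluate(f, env):
    op = f[0]
    if op == 'var':
        return env[f[1]]
    if op == 'not':
        return not evaluate(f[1], env)
    if op == 'and':
        return evaluate(f[1], env) and evaluate(f[2], env)
    return evaluate(f[1], env) or evaluate(f[2], env)

original = ('not', ('and', ('var', 'p'), ('var', 'q')))
rewritten = de_morgan(original)
# A generated question is valid only if the rewrite is truth-preserving,
# which a truth-table check over all assignments confirms.
equivalent = all(
    evaluate(original, {'p': p, 'q': q}) == evaluate(rewritten, {'p': p, 'q': q})
    for p, q in product([True, False], repeat=2)
)
print(rewritten, equivalent)
```

Chaining several such rewrites, chosen by a grammar with difficulty-controlling attributes, is the general shape of producing "show these formulas are equivalent" exercises automatically.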